
3.2.x Performance

0. Env

Disk: 2 TB NVMe SSD

CPU: Intel(R) Xeon(R) CPU E5-2630 v4 @ 2.20GHz * 40

Memory: 256 GB

Network Card: 10-Gigabit Ethernet

OS: CentOS Linux release 7.4

1. Single DB

1.1 Write binlog with one slave

data size: 64 bytes

key num: 1,000,000

| Test | QPS |
| --- | --- |
| set | 124347 |
| get | 283849 |
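A run like the following can produce comparable single-instance set/get measurements with 64-byte values over a 1,000,000-key keyspace. This is a minimal sketch, assuming Pika's default port 9221 on localhost; the host, client count, and other flags are placeholders rather than the exact settings behind the published numbers.

```bash
# Hedged sketch: SET/GET benchmark with 64-byte values over a 1,000,000-key
# keyspace against a single local Pika instance (default port 9221 assumed).
redis-benchmark -h 127.0.0.1 -p 9221 -t set,get \
    -n 1000000 -r 1000000 -d 64 -c 50 -q
```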

1.2 No binlog, no slave

Figure: single-DB benchmark results (no binlog, no slave).

1.3 Benchmark Results

| Command | With Binlog & Slave (QPS) | No Binlog & Slave (QPS) |
| --- | --- | --- |
| PING_INLINE | 262329 | 272479 |
| PING_BULK | 262467 | 270562 |
| SET | 124953 | 211327 |
| GET | 284900 | 292568 |
| INCR | 120004 | 213766 |
| MSET (10 keys) | 64863 | 111578 |
| MGET (10 keys) | 224416 | 223513 |
| MGET (100 keys) | 29935 | 29550 |
| MGET (200 keys) | 15128 | 14912 |
| LPUSH | 117799 | 205380 |
| RPUSH | 117481 | 205212 |
| LPOP | 112120 | 200320 |
| RPOP | 119932 | 207986 |
| LRANGE_10 (first 10 elements) | 277932 | 284414 |
| LRANGE_100 (first 100 elements) | 165118 | 164355 |
| LRANGE_300 (first 300 elements) | 54907 | 55096 |
| LRANGE_450 (first 450 elements) | 36656 | 36630 |
| LRANGE_600 (first 600 elements) | 27540 | 27510 |
| SADD | 126230 | 208768 |
| SPOP | 103135 | 166555 |
| HSET | 122443 | 214362 |
| HINCRBY | 114757 | 208942 |
| HINCRBYFLOAT | 114377 | 208550 |
| HGET | 284900 | 290951 |
| HMSET (10 fields) | 58937 | 111445 |
| HMGET (10 fields) | 203624 | 205592 |
| HGETALL | 166861 | 160797 |
| ZADD | 106780 | 189178 |
| ZREM | 112866 | 201938 |
| PFADD | 4708 | 4692 |
| PFCOUNT | 27412 | 27345 |
| PFMERGE | 478 | 494 |
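The only difference between the two columns is whether the instance writes a binlog and has a slave attached. The sketch below shows one way to switch between the two setups; it assumes the write-binlog option in pika.conf (as shipped with Pika 3.2.x), uses placeholder addresses, and assumes a second instance on port 9222 acting as the slave.

```bash
# "With Binlog & Slave" (assumed pika.conf option on the master):
#     write-binlog : yes
# Attach a second instance as slave; Pika accepts the SLAVEOF command:
redis-cli -h 127.0.0.1 -p 9222 slaveof 127.0.0.1 9221

# "No Binlog & Slave": detach the slave and turn the binlog off:
redis-cli -h 127.0.0.1 -p 9222 slaveof no one
#     write-binlog : no      # in the master's pika.conf, then restart it
```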

1.4 Comparison with Redis

With the Redis AOF configuration appendfsync everysec, Redis essentially writes data only to memory, while Pika uses RocksDB, which writes a WAL to the SSD for every write batch. The comparison therefore becomes multiple threads writing sequentially to an SSD versus a single thread writing to memory.
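For reference, the Redis baseline referred to above corresponds to the standard AOF directives appendonly yes and appendfsync everysec; they can be set in redis.conf or at runtime as sketched below (the address is a placeholder).

```bash
# Enable AOF with per-second fsync on the Redis instance used for comparison.
redis-cli -h 127.0.0.1 -p 6379 config set appendonly yes
redis-cli -h 127.0.0.1 -p 6379 config set appendfsync everysec
```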

Leaving the question of fairness aside, here is the performance comparison.

2. Cluster (Codis)

2.1 Topology:

With Binlog & Slave:

4 machines * 2 pika instances (Master)

4 machines * 2 pika instances (Slave)

No Binlog & Slave:

4 machines * 2 pika instances (Master)

Slots Distribution:

2.2 With Binlog & Slave

| Command | QPS |
| --- | --- |
| Set | 1,400,000+ |

2.3 No Binlog & Slave

| Command | QPS |
| --- | --- |
| Set | 1,600,000+ |

2.4 Get Command

| Command | QPS |
| --- | --- |
| Get | 2,300,000+ |

With or without binlog, the QPS for the Get command is approximately the same, since reads do not go through the binlog.
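The cluster figures above are aggregate numbers obtained by driving the codis-proxy instances from several clients in parallel and summing the reported QPS. A minimal sketch of such a run is below; the proxy addresses, the default codis-proxy port 19000, and the benchmark flags are assumptions, not the exact setup used for the published results.

```bash
# Hedged sketch: run one benchmark client per codis-proxy in parallel and sum
# the per-client SET QPS. Addresses, port, and flags are placeholders.
for proxy in 10.0.0.1 10.0.0.2 10.0.0.3 10.0.0.4; do
    redis-benchmark -h "$proxy" -p 19000 -t set \
        -n 1000000 -r 1000000 -d 64 -c 100 -q &
done
wait   # total QPS ≈ sum of the four reported SET results
```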